Efficient Stopping Rules for Markov Chains

Author

  • Peter Winkler
Abstract

Let M be the transition matrix, and σ the initial state distribution, for a discrete-time finite-state irreducible Markov chain. A stopping rule for M is an algorithm which observes the progress of the chain and then stops it at some random time Γ; the distribution of the final state is denoted by σΓ. We give a useful characterization for stopping rules which are optimal for a given target distribution τ, in the sense that σΓ = τ and the expected stopping time EΓ is minimal. Four classes of optimal stopping rules are described, including a unique "threshold" rule which also minimizes max(Γ). The minimum value of EΓ, which we denote by H(σ, τ), is easily computable from the hitting times of M. For applications in computing, the most important case is when σ is concentrated on a single starting state s and τ is the stationary distribution π. We describe a simple, practical stopping rule which achieves a target distribution close to π in expected time of order Tmix = max_s H(s, π). Finally, we give a stopping rule that runs in time polynomial in the maximum hitting time of M and achieves the stationary distribution exactly, even though the transition probabilities of the chain are unknown. Some of the work described herein is joint with David Aldous.

1 Introduction

In the past ten years many new applications have been found (see, e.g., [3]) for sampling via Markov chains; these have resulted in polynomial-time randomized approximation schemes for computing the volume of a convex body [12] and counting combinatorial objects such as matchings [13], linear extensions [14], and Eulerian orientations [18]. Typically, in these applications, sampling from an approximately correct distribution is accomplished by walking randomly on a graph for a fixed number of steps, after which the distribution of the occupied vertex is nearly stationary. There is no particular reason why such a walk must be run for a fixed number of steps; in fact, more general stopping rules which "look where they are going" are capable
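As a concrete illustration of a stopping rule that achieves a target distribution exactly, here is a minimal sketch of the well-known "random target" rule: draw a target state j with probability τ(j), then run the chain until it first hits j. The stopped state then has distribution exactly τ. The transition matrix and target below are made-up toy values, not taken from the paper.

```python
import random

# Toy 3-state chain (made-up values) and a target distribution tau.
M = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
tau = [0.25, 0.5, 0.25]

def step(state):
    """Take one step of the chain from `state` according to row M[state]."""
    r, acc = random.random(), 0.0
    for j, p in enumerate(M[state]):
        acc += p
        if r < acc:
            return j
    return len(M) - 1

def random_target_rule(start):
    """Stop the walk from `start` at the first visit to a tau-random target.

    Since we always stop exactly at the drawn target, the final state has
    distribution tau regardless of the starting state.
    """
    target = random.choices(range(len(tau)), weights=tau)[0]
    state, steps = start, 0
    while state != target:
        state = step(state)
        steps += 1
    return state, steps

# Empirical check: the stopped state's frequencies approach tau.
random.seed(0)
counts = [0, 0, 0]
for _ in range(20000):
    final, _ = random_target_rule(0)
    counts[final] += 1
print([c / 20000 for c in counts])  # close to tau = [0.25, 0.5, 0.25]
```

This rule is generally not optimal in the sense of the paper (its expected time is a τ-weighted average of hitting times rather than the minimum H(σ, τ)), but it shows how a rule that "looks where it is going" can hit a target distribution exactly.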


Similar Articles

An Automated Stopping Rule for MCMC Convergence Assessment

In this paper, we propose a methodology essentially based on the Central Limit Theorem for Markov chains to monitor convergence of MCMC algorithms using actual outputs. Our methods are grounded on the fact that normality is a testable implication of sufficient mixing. The first control tool tests the normality hypothesis for normalized averages of functions of the Markov chain over independent para...
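The idea described above can be sketched as follows: by the Markov-chain CLT, batch means of a well-mixed chain should be approximately normal, so a normality test on the batch means serves as a rough convergence diagnostic. This is only an illustration of the principle, not the authors' actual procedure; the AR(1) chain below is a made-up stand-in for MCMC output.

```python
import math
import random

def mcmc_samples(n, rng):
    """Toy AR(1) chain standing in for MCMC output (made-up example)."""
    x, out = 0.0, []
    for _ in range(n):
        x = 0.5 * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def batch_means(xs, n_batches):
    """Split the trace into equal batches and return each batch's mean."""
    size = len(xs) // n_batches
    return [sum(xs[i * size:(i + 1) * size]) / size for i in range(n_batches)]

def jarque_bera_pvalue(xs):
    """p-value of the Jarque-Bera normality test.

    JB = n/6 * (skew^2 + (kurtosis - 3)^2 / 4) is asymptotically
    chi-squared with 2 degrees of freedom, whose survival function
    is exp(-x/2), so the p-value has a closed form.
    """
    n = len(xs)
    m = sum(xs) / n
    c = [x - m for x in xs]
    m2 = sum(v * v for v in c) / n
    m3 = sum(v ** 3 for v in c) / n
    m4 = sum(v ** 4 for v in c) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    jb = n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
    return math.exp(-jb / 2.0)

rng = random.Random(0)
samples = mcmc_samples(50_000, rng)
p = jarque_bera_pvalue(batch_means(samples, 50))
print(f"normality p-value of batch means: {p:.3f}")
```

A small p-value suggests the batch means are not yet normal, i.e. the chain may not have mixed sufficiently at the chosen batch size.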


Lecture notes on Stopping Times

Markov chains are awesome and can solve some important computational problems. A classic example is computing the volume of a convex body, where Markov chains and random sampling provide the only known polynomial time algorithm. Arguably the most important question about a Markov chain is how long to run it until it converges to its stationary distribution. Generally, a chain is run for some fi...


Compressing Redundant Information in Markov Chains

Given a strongly stationary Markov chain and a finite set of stopping rules, we prove the existence of a polynomial algorithm which projects the Markov chain onto a minimal Markov chain without redundant information. Markov complexity is hence defined and tested on some classical problems.


Stopping rule reversal for finite Markov chains

Consider a finite irreducible Markov chain with transition matrix M = (pij). Fixing a target distribution τ, we study a family of optimal stopping rules from the singleton distributions to τ. We show that this family of rules is dual to a family of (not necessarily optimal) rules on the reverse chain from the singleton distributions to a related distribution α̂ called the τ-contrast distribut...



Publication date: 1995